Skin Lesion Classification: ResNet18 (Experimentation)

This is one of the models implemented during the experimentation stage. It was not used in the final system due to problems with saving the trained model and loading it into the backend of the web application.

This algorithm uses the ResNet18 neural network and is a modified version of a LeNet-based skin cancer detection algorithm originally written by Soham Mazumder.

For this skin cancer detection algorithm, the HAM10000 ("Human Against Machine with 10000 training images") dataset was used, which contains 10,015 dermatoscopic images.

The 7 classes of skin cancer lesions included in this dataset are:

  1. Melanocytic nevi
  2. Melanoma
  3. Benign keratosis-like lesions
  4. Basal cell carcinoma
  5. Actinic keratoses
  6. Vascular lesions
  7. Dermatofibroma

Originally by Soham Mazumder
GitHub Link: https://github.com/IFL-CAMP/SLClassificationAnEducationalCode-MEC2019

Data Preparation & Import

Plotting the dataset distribution
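As a rough sketch of this step (the HAM10000 metadata file stores the diagnosis in a `dx` column; the filename and plotting details here are assumptions), the per-class counts can be computed like this before handing them to a bar plot:

```python
from collections import Counter

def class_distribution(labels):
    """Count how many images belong to each lesion class."""
    return Counter(labels)

# In the notebook the labels would come from the `dx` column of
# HAM10000_metadata.csv; these few labels are just illustrative.
labels = ["nv", "nv", "mel", "bkl", "nv"]
dist = class_distribution(labels)
# dist["nv"] == 3; dist.items() can be passed to matplotlib's plt.bar
```

The resulting counts make the strong class imbalance of HAM10000 visible (melanocytic nevi dominate the dataset).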

Sorting the dataset

The commented-out function below takes the HAM10000 dataset in its original form (data/HAM10000) and sorts all the images by skin cancer type (data/HAM10K). It is commented out because the sorting has already been done and does not need to be run again.
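A minimal sketch of what such a sorting function looks like, assuming images are JPEGs named by their image ID and that the metadata provides (image ID, diagnosis) pairs; the helper names and paths below are hypothetical:

```python
import os
import shutil

def sorted_path(image_id, dx, dst_root="data/HAM10K"):
    """Return the destination path for an image, grouped by diagnosis code."""
    return os.path.join(dst_root, dx, image_id + ".jpg")

def sort_dataset(metadata, src_root="data/HAM10000", dst_root="data/HAM10K"):
    """Copy every image into a per-class folder.

    `metadata` is an iterable of (image_id, dx) pairs, e.g. read from
    the HAM10000 metadata CSV.
    """
    for image_id, dx in metadata:
        dst = sorted_path(image_id, dx, dst_root)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy(os.path.join(src_root, image_id + ".jpg"), dst)
```

Grouping images into one folder per class lets standard loaders such as `torchvision.datasets.ImageFolder` infer the labels from the directory names.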

Visualizing the images

Preparing the Model: Train, Test & Validation

The dataset is divided into three parts: training, validation, and test sets.

The splitting is done class-wise so that all classes are equally represented in each subset of the data.
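The class-wise split described above can be sketched as follows; the 70/15/15 ratios are an assumption, and the notebook may use different ones:

```python
import random

def stratified_split(files_by_class, train=0.7, val=0.15, seed=0):
    """Split each class's file list into train/val/test with fixed ratios,
    so every class is represented proportionally in every subset."""
    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    for cls, files in files_by_class.items():
        files = list(files)
        rng.shuffle(files)
        n_train = int(len(files) * train)
        n_val = int(len(files) * val)
        splits["train"] += [(f, cls) for f in files[:n_train]]
        splits["val"] += [(f, cls) for f in files[n_train:n_train + n_val]]
        splits["test"] += [(f, cls) for f in files[n_train + n_val:]]
    return splits
```

Splitting per class (rather than over the whole pool at once) matters here because HAM10000 is heavily imbalanced; a naive random split could leave a rare class such as dermatofibroma nearly absent from the validation set.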

Loading the dataset into memory

Some of the training images

Downloading ResNet18

Defining a Loss function and Optimizer

Classification Cross-Entropy loss equation:

$H_{y'} (y) := - \sum_{i} y_{i}' \log (y_i)$
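The formula can be checked numerically with a small pure-Python sketch; the probabilities below are made up for illustration:

```python
import math

def cross_entropy(y_true, y_pred):
    """H_{y'}(y) = -sum_i y'_i * log(y_i) for one sample."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

# With a one-hot target, only the predicted probability of the true
# class contributes to the loss.
y_true = [0, 1, 0]          # sample belongs to class 1
y_pred = [0.2, 0.7, 0.1]    # model's softmax output
loss = cross_entropy(y_true, y_pred)  # -log(0.7) ≈ 0.357
```

The loss shrinks toward 0 as the predicted probability of the true class approaches 1, and grows without bound as it approaches 0.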

The most common and effective optimizer currently in use is Adam (Adaptive Moment Estimation).
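In PyTorch the loss and optimizer setup, plus one training step, look roughly like this; a tiny stand-in model is used so the sketch runs on its own, and the learning rate is an assumption (in the notebook `model` would be the ResNet18 above):

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 7)  # stand-in for ResNet18
criterion = nn.CrossEntropyLoss()  # expects raw logits, not softmax outputs
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# One training step:
inputs = torch.randn(2, 4)
labels = torch.tensor([3, 0])
optimizer.zero_grad()            # clear gradients from the previous step
loss = criterion(model(inputs), labels)
loss.backward()                  # backpropagate
optimizer.step()                 # Adam parameter update
```

Note that `nn.CrossEntropyLoss` combines the softmax and the log term of the equation above internally, so the model's raw outputs are passed in directly.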

Helper functions to evaluate the training process

Training the network

Plotting the training and validation loss curves.

Test the network on the test data

We have trained the network on the training dataset, but we need to check whether the network has learnt anything at all.

We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
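The check described above reduces to comparing the predicted class labels against the ground truth; a minimal sketch (in the notebook the predictions would come from `torch.max(outputs, 1)` on the network's outputs):

```python
def accuracy(predictions, targets):
    """Fraction of samples whose predicted class label matches the
    ground-truth label."""
    correct = sum(1 for p, t in zip(predictions, targets) if p == t)
    return correct / len(targets)

preds = [1, 0, 2, 1]   # argmax of the network outputs
truth = [1, 0, 1, 1]   # ground-truth labels
acc = accuracy(preds, truth)  # 0.75
```

On an imbalanced dataset like HAM10000, overall accuracy alone can be misleading, which is why the per-class confusion matrix below is also examined.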

Okay, first step. Let us display an image from the test set to get familiar.

Okay, now let us check the network's performance on the test set:

Confusion Matrix
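The confusion matrix can be built by tallying (true class, predicted class) pairs; a pure-Python sketch (in practice `sklearn.metrics.confusion_matrix` does the same thing):

```python
def confusion_matrix(targets, predictions, num_classes=7):
    """Rows = true class, columns = predicted class."""
    cm = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(targets, predictions):
        cm[t][p] += 1
    return cm

# Two-class toy example: one class-0 sample is misclassified as class 1.
cm = confusion_matrix([0, 0, 1], [0, 1, 1], num_classes=2)
# cm == [[1, 1], [0, 1]]
```

Correct predictions land on the diagonal, so off-diagonal entries show which lesion classes the network confuses with one another.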

Grad-CAM

Saving the model

After training, the model is saved.
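A sketch of saving and reloading via the state dict; the filename and the stand-in model are assumptions. Saving only the state dict (rather than pickling the whole model object) is the usual remedy when a saved model fails to load in another environment, such as the web application backend mentioned at the top:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 7)  # stand-in for the trained ResNet18
torch.save(model.state_dict(), "resnet18_ham10000.pth")

# To load later, rebuild the architecture first, then restore the weights:
restored = nn.Linear(4, 7)
restored.load_state_dict(torch.load("resnet18_ham10000.pth"))
restored.eval()  # switch to inference mode before serving predictions
```

Because the state dict stores only tensors keyed by parameter name, it loads cleanly as long as the backend can reconstruct the same architecture, with no dependence on the training script's class definitions.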